13 research outputs found

    Unsupervised Speech Representation Pooling Using Vector Quantization

    Full text link
    With the advent of general-purpose speech representations from large-scale self-supervised models, applying a single model to multiple downstream tasks has become the de facto approach. However, the pooling problem remains: the length of speech representations is inherently variable. Naive average pooling is often used, even though it ignores characteristics of speech such as phonemes of different lengths. Hence, we design a novel pooling method that squashes acoustically similar representations via vector quantization and, unlike attention-based pooling, requires no additional training. Further, we evaluate various unsupervised pooling methods on various self-supervised models. We gather diverse methods scattered across speech and text and evaluate them on various tasks: keyword spotting, speaker identification, intent classification, and emotion recognition. Finally, we quantitatively and qualitatively analyze our method, comparing it with supervised pooling methods.
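    The core idea, merging acoustically similar neighboring frames before averaging, can be sketched in plain NumPy. The codebook, the Euclidean nearest-codeword assignment, and the segment-merge rule below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def vq_pool(frames, codebook):
    """Pool a variable-length (T, D) representation into a single (D,) vector.

    Each frame is assigned to its nearest codeword; consecutive frames that
    share a code are merged (averaged) into one segment, so a long phoneme
    contributes one vector to the final average instead of one per frame.
    """
    # nearest-codeword assignment by squared Euclidean distance
    dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    codes = dists.argmin(axis=1)

    # merge runs of identical codes into segment means, then average segments
    segments, start = [], 0
    for t in range(1, len(codes) + 1):
        if t == len(codes) or codes[t] != codes[start]:
            segments.append(frames[start:t].mean(axis=0))
            start = t
    return np.stack(segments).mean(axis=0)
```

    Because repeated codes collapse into one segment, a phoneme that spans many frames no longer outweighs a short one the way naive average pooling would.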

    Progressive Few-Shot Adaptation of Generative Model with Align-Free Spatial Correlation

    No full text
    In few-shot generative model adaptation, the model for the target domain is prone to mode collapse. Recent studies attempted to mitigate the problem by matching the relationship among samples generated from the same latent codes in the source and target domains. The objective was further extended to the image-patch level to transfer the spatial correlation within an instance. However, the patch-level approach assumes a consistent spatial structure between the source and target domains (for example, that the positions of eyes in the two domains are almost identical), and it can therefore introduce visual artifacts when source- and target-domain images are not well aligned. In this paper, we propose a few-shot generative model adaptation method free from such an assumption, motivated by the observation that generative models adapt progressively from the source domain to the target domain. Such progressive change allows us to identify semantically coherent image regions between instances generated by models at neighboring training iterations, and thereby to consider the spatial correlation. We also propose an importance-based patch selection strategy to reduce the complexity of patch-level correlation matching. Our method shows state-of-the-art few-shot domain adaptation performance in qualitative and quantitative evaluations.
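    The two building blocks, patch-level correlation and importance-based selection, can be sketched assuming patch features arrive as (N, D) arrays; the norm-based importance proxy here is a hypothetical stand-in for the paper's actual selection criterion:

```python
import numpy as np

def patch_correlation(feats_a, feats_b):
    """Cosine-similarity matrix between two sets of (N, D) patch features,
    e.g. from models at neighboring training iterations."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    return a @ b.T

def select_important_patches(feats, k):
    """Keep the k patches with the largest feature norm -- a hypothetical
    importance proxy to cut the cost of correlation matching from N^2
    pairs down to k^2."""
    idx = np.argsort(-np.linalg.norm(feats, axis=1))[:k]
    return feats[idx]
```

    Selecting k patches before matching is what reduces the quadratic cost of the pairwise correlation step.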

    Integrative Technical, Economic, and Environmental Feasibility Analysis for Ethane Steam Reforming in a Membrane Reactor for H2 Production

    No full text
    Integrative technoeconomic analysis (iTEA), combining process simulation and economic analysis, and integrative carbon footprint analysis (iCFA) of ethane steam reforming (ESR) for H2 production in a membrane reactor (MR) were carried out to assess its technical, economic, and environmental feasibility in Korea. Process conditions from the simulation were used to perform economic analysis through itemized cost estimation, sensitivity analysis, and probability analysis using a Monte Carlo simulation method. Respective H2 production costs of $3.678 and $2.926 kgH2^-1 were obtained for a conventional reactor and an MR, and the main economic items, reactant and labor, were confirmed using sensitivity analysis. An H2 selling price of $4.091 kgH2^-1, assuming an onsite H2 refueling station using the proposed ESR, was obtained; this can be compared with current and future H2 selling prices from H2 refueling stations and the United States Department of Energy, and with the H2 selling prices projected in the Korean government's Hydrogen Roadmap. In addition, cumulative probability curves and frequency graphs for the H2 selling price, obtained by varying the main economic items, were created to show the uncertainty of the proposed ESR through probability analysis. Moreover, a reduction in the CO2 emission rate of about 13.3% was obtained in an MR, as shown by the CO2 flow diagram of the iCFA.
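    The probability analysis described above can be sketched as a small Monte Carlo experiment. The cost items, their distributions, and the dollar figures below are illustrative assumptions, not the study's actual inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical cost items ($ per kg H2), sampled once per Monte Carlo trial;
# the reactant term dominates, echoing the sensitivity-analysis finding above.
reactant = rng.normal(1.6, 0.30, n)
labor    = rng.normal(0.7, 0.10, n)
other    = rng.normal(0.6, 0.05, n)

unit_cost = reactant + labor + other

# cumulative probability curve: P(cost <= x) over a grid of selling prices
grid = np.linspace(1.0, 5.0, 9)
cdf = [(unit_cost <= x).mean() for x in grid]
```

    Plotting `cdf` against `grid` gives exactly the kind of cumulative probability curve the abstract mentions; the frequency graph is a histogram of `unit_cost`.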

    The Spatial and Temporal Structure of Extreme Rainfall Trends in South Korea

    No full text
    The spatial and temporal structures of extreme rainfall trends in South Korea are investigated in this study. Trends in the annual maximum rainfall series are detected and their spatial distribution is analyzed. The scaling exponent is employed as an index representing the temporal structure; it is calculated for the annual maximum series and spatially analyzed. Subsequently, the block-bootstrap-based Mann-Kendall test is employed to detect trends in the scaling exponent series subsampled from the annual maximum rainfalls using a moving window. For the annual maximum rainfall series, significant trends are detected at only a small number of stations, with no significant trend at many others. There is large variability in the temporal structures of the extreme rainfall events. Additionally, the variations of the scaling exponent estimates for each month within a rainy season are larger than the variation of the estimates on an annual basis. Unlike the trend test results for the annual maximum rainfall series, significant trends in the temporal structures are observed at many stations: decreasing trends at many stations located in coastal areas, and increasing trends in inland areas.
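    The block-bootstrap Mann-Kendall test can be sketched as follows, assuming whole blocks are resampled to build the null distribution of the S statistic while preserving short-range serial dependence; the block length and resample count are arbitrary choices here:

```python
import numpy as np

def mk_stat(x):
    """Mann-Kendall S statistic: increasing pairs minus decreasing pairs."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    return sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))

def block_bootstrap_pvalue(x, block=5, n_boot=500, seed=0):
    """Two-sided p-value for the MK test via a moving-block bootstrap.

    Resampling whole blocks destroys any monotone trend while keeping
    short-range serial dependence, giving a null distribution for S.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    s_obs = mk_stat(x)
    blocks = np.array([x[i:i + block] for i in range(n - block + 1)])
    s_null = []
    for _ in range(n_boot):
        pick = rng.integers(0, len(blocks), size=int(np.ceil(n / block)))
        s_null.append(mk_stat(np.concatenate(blocks[pick])[:n]))
    return float(np.mean(np.abs(s_null) >= abs(s_obs)))
```

    The same test applied to a scaling-exponent series subsampled by a moving window is what yields the station-by-station trend maps described above.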

    Mixture Gumbel models for extreme series including infrequent phenomena

    No full text
    A Gumbel mixture distribution is proposed for modelling extreme events arising from two different mechanisms: one phenomenon occurring annually and one occurring infrequently. A new Monte Carlo simulation procedure is presented and used to assess the consequences of fitting traditional Gumbel or GEV models to annual maximum series originating from two different populations. The results show that mixture models are preferred to single-population models when the two populations are very different. The Gumbel mixture model was applied to annual maximum 24-hour rainfall data from 64 South Korean rain gauges, split into events generated by typhoon and non-typhoon rainfall. The results show that the mixture model provides a more accurate description of the observed data than the Gumbel distribution, but is comparable to the GEV model. The theoretical and practical results highlight the need for more robust methods for identifying the underlying populations before mixture models can be recommended.
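    The two-population idea follows directly from the Gumbel CDF, F(x) = exp(-exp(-(x - mu)/beta)). The weights and parameters in this sketch are illustrative, not values fitted to the Korean rainfall data:

```python
import numpy as np

EULER_GAMMA = 0.5772156649  # Gumbel mean = mu + beta * gamma

def gumbel_cdf(x, mu, beta):
    """Gumbel CDF: F(x) = exp(-exp(-(x - mu) / beta))."""
    return np.exp(-np.exp(-(np.asarray(x, dtype=float) - mu) / beta))

def mixture_cdf(x, w, mu1, beta1, mu2, beta2):
    """Two-component Gumbel mixture, e.g. non-typhoon (weight w) and
    infrequent typhoon (weight 1 - w) annual-maximum populations."""
    return w * gumbel_cdf(x, mu1, beta1) + (1 - w) * gumbel_cdf(x, mu2, beta2)

def sample_mixture(n, w, mu1, beta1, mu2, beta2, seed=0):
    """Draw annual maxima by first picking a component, then inverting
    its CDF: x = mu - beta * ln(-ln(u))."""
    rng = np.random.default_rng(seed)
    comp = rng.random(n) < w
    u = rng.random(n)
    return np.where(comp,
                    mu1 - beta1 * np.log(-np.log(u)),
                    mu2 - beta2 * np.log(-np.log(u)))
```

    Sampling from such a mixture and refitting single-population Gumbel or GEV models is the kind of Monte Carlo experiment the abstract describes.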

    Deterministic and stochastic economic analysis based on historical natural gas and CO2 allowance prices for steam reforming of methanol

    No full text
    As an appropriate hydrogen supply system for high-temperature polymer electrolyte membrane fuel cells, steam reforming of methanol (SRM) is proposed because the reformate gas, low-quality H2 with some impurities, can be used directly as the fuel. In this study, we compare two SRM systems, with and without the use of tail gas from pressure swing adsorption, and perform a comprehensive economic analysis of the SRM process with tail-gas use to estimate the unit H2 production cost (C_UHP) based on historical natural gas and CO2 allowance prices for Korea, Europe, the Western Climate Initiative (WCI), and the Regional Greenhouse Gas Initiative (RGGI). From deterministic economic analysis with an H2 production capacity of 700 m3 h^-1, the C_UHP values are $6.88 kgH2^-1 for Korea, $6.84 kgH2^-1 for Europe, $6.50 kgH2^-1 for WCI, and $6.54 kgH2^-1 for RGGI. Furthermore, two additional scenarios, a pessimistic and an optimistic one (Scenarios 1 and 2, respectively), are investigated to evaluate the current level of C_UHP for H2 production capacities from 30 to 700 m3 h^-1. Using a Monte Carlo simulation method, stochastic economic analysis is carried out to predict the ranges of potential C_UHP values for each region. The C_UHP values for WCI and RGGI show narrow distributions, indicating little variation; in contrast, the wide distributions for Korea and Europe indicate considerably higher variation, demonstrating the necessity of stochastic uncertainty analysis when assessing economic feasibility.
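    The historical-price-driven stochastic analysis can be sketched as a bootstrap over recorded price series; the linear cost model and its coefficients below are hypothetical placeholders, not the study's process model:

```python
import numpy as np

def cuhp_distribution(gas_prices, co2_prices, gas_per_kg, co2_per_kg,
                      fixed_cost, n=50_000, seed=0):
    """Bootstrap historical price records into a unit-H2-cost distribution.

    Hypothetical linear model: C_UHP = fixed_cost + gas_per_kg * P_gas
    + co2_per_kg * P_co2, with the two prices resampled independently
    from their historical series.
    """
    rng = np.random.default_rng(seed)
    pg = rng.choice(np.asarray(gas_prices, dtype=float), size=n)
    pc = rng.choice(np.asarray(co2_prices, dtype=float), size=n)
    return fixed_cost + gas_per_kg * pg + co2_per_kg * pc
```

    A region whose historical price series is wider, as the abstract reports for Korea and Europe, yields a correspondingly wider C_UHP distribution, which is exactly what the stochastic analysis is meant to expose.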

    On the structure of Si(100) surface:importance of higher order correlations for buckled dimer

    No full text
    We revisit a dangling theoretical question of whether the surface reconstruction of the Si(100) surface energetically favors symmetric or buckled dimers on the intrinsic potential energy surfaces at 0 K. This seemingly simple question is still not answered definitively, since all existing density-functional-based calculations predict the dimers to be buckled, while most wavefunction-based correlated treatments prefer the symmetric configuration. Here, we use doubly hybrid density functional (DHDF) geometry optimizations, in particular XYGJ-OS, together with complete active space self-consistent field theory, multi-reference perturbation theory, multi-reference configuration interaction (MRCI), MRCI with the Davidson correction (MRCI + Q), multi-reference averaged quadratic coupled-cluster (MRAQCC), and multi-reference averaged coupled pair functional (MRACPF) methods to address this question. The symmetric dimers are still found to be lower in energy than the buckled dimers when the CASPT2 method is used on the DHDF-optimized geometries, consistent with previous results using B3LYP geometries [Y. Jung, Y. Shao, M. S. Gordon, D. J. Doren, and M. Head-Gordon, J. Chem. Phys. 119, 10917 (2003); doi:10.1063/1.1620994]. Interestingly, however, the MRCI + Q, MRAQCC, and MRACPF results, which give a more refined description of electron correlation effects, suggest that the buckled dimer is marginally more stable than its symmetric counterpart. The present study underlines the significance of an accurate description of electron-electron correlation, as well as proper multi-reference wave functions, when exploring the extremely delicate potential energy surfaces of the reconstructed Si(100) surface.

    Probability Distributions for a Quantile Mapping Technique for a Bias Correction of Precipitation Data: A Case Study to Precipitation Data Under Climate Change

    No full text
    The quantile mapping method is a bias correction method that performs well for precipitation. Selecting an appropriate probability distribution model is essential for its successful implementation. Probability distribution models with two shape parameters have proven suitable for precipitation modeling because of their flexibility; hence, applying a two-shape-parameter distribution should improve the performance of quantile mapping in the bias correction of precipitation data. In this study, the applicability and appropriateness of two-shape-parameter distribution models are examined in quantile mapping for bias correction of precipitation data simulated by a climate model under a climate change scenario. Additionally, the impacts of distribution selection on the frequency analysis of future extreme precipitation from the climate model are investigated. Generalized Lindley, Burr XII, and Kappa distributions are used, and their fits and appropriateness are compared with those of conventional distributions in a case study. Applications of two-shape-parameter distributions do lead to better performance in reproducing the statistical characteristics of observed precipitation than conventional distributions. The Kappa distribution is considered the best distribution model, as it can reproduce reliable spatial dependences of the quantile corresponding to a 100-year return period, unlike the gamma distribution.
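    Quantile mapping itself can be sketched in its empirical form, where the fitted distributions are replaced by empirical CDFs; parametric variants would substitute the gamma, Burr XII, or Kappa fits discussed above:

```python
import numpy as np

def quantile_map(sim, obs, values):
    """Empirical quantile mapping: look up each value's non-exceedance
    probability under the simulated distribution, then return the observed
    quantile at that probability. Parametric versions replace both ECDFs
    with fitted distributions (e.g. gamma, Burr XII, or Kappa)."""
    sim = np.sort(np.asarray(sim, dtype=float))
    obs = np.asarray(obs, dtype=float)
    # ECDF of the simulation evaluated at each value to correct
    p = np.searchsorted(sim, values, side="right") / len(sim)
    return np.quantile(obs, np.clip(p, 0.0, 1.0))
```

    The choice of distribution matters most in the upper tail: the probabilities mapped near p = 0.99 determine the corrected extremes, which is why tail-flexible two-shape-parameter families affect the 100-year quantile so strongly.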